
Neural Information Processing Systems

Figure 11: RecBeer environments: Each year serves as a different environment, whose effect is expressed through differing correlations between beer types and user choices.




Semi-Open 3D Object Retrieval via Hierarchical Equilibrium on Hypergraph

Neural Information Processing Systems

Existing open-set learning methods consider only single-layer object labels and strictly assume no overlap between the training and testing sets, leading to contradictory optimization for superposed categories. In this paper, we introduce a more practical Semi-Open Environment setting for open-set 3D object retrieval with hierarchical labels, in which the training and testing sets share a partial label space for coarse categories but are completely disjoint in fine categories. We propose the Hypergraph-Based Hierarchical Equilibrium Representation (HERT) framework for this task. Specifically, we propose the Hierarchical Retrace Embedding (HRE) module to overcome the global disequilibrium of unseen categories by fully leveraging the multi-level category information. Besides, to tackle the feature overlap and class confusion problem, we propose the Structured Equilibrium Tuning (SET) module, which exploits more equilibrial correlations among objects and generalizes to unseen categories by constructing a superposed hypergraph based on the locally coherent and globally entangled correlations. Furthermore, we generate four semi-open 3DOR datasets with multi-level labels for benchmarking. Results demonstrate that the proposed method can effectively generate hierarchical embeddings of 3D objects and generalize them towards semi-open environments.
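The semi-open split described above can be illustrated with a minimal sketch. All names and data below are hypothetical, not from the paper's benchmarks; the only property encoded is the one the abstract states: coarse categories may be shared across the training and testing sets, while fine categories are completely disjoint.

```python
def semi_open_split(objects, train_fine, test_fine):
    """Split (obj_id, coarse_label, fine_label) triples into a semi-open
    train/test partition: fine labels are disjoint, coarse labels may overlap."""
    assert not (set(train_fine) & set(test_fine)), "fine labels must be disjoint"
    train = [o for o in objects if o[2] in train_fine]
    test = [o for o in objects if o[2] in test_fine]
    return train, test

# Illustrative toy data: two coarse categories, four fine categories.
objects = [
    (0, "chair", "armchair"), (1, "chair", "stool"),
    (2, "table", "desk"),     (3, "table", "coffee_table"),
]
train, test = semi_open_split(
    objects,
    train_fine={"armchair", "desk"},
    test_fine={"stool", "coffee_table"},
)
# Both splits contain the coarse labels "chair" and "table",
# but no fine label appears in both.
```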


Supplementary Material for Bootstrapping Neural Processes

Neural Information Processing Systems

We sampled 100 GP prior functions with zero mean and unit variance. After realizing them, the prior functions are used as optimization targets for Bayesian optimization. All the experiments are implemented with [8]. The setup is the same as in Appendix B.1, except that all the models were trained for 200; the other details are the same as in Appendix B.1. [Table: cross-entropy (CE) and sharpness under t-noise, for seen classes (0-9) and unseen classes (10-46); models include CNP.] We also measure the sharpness [10], which is essentially an average prediction variance.
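The two ingredients above can be sketched briefly. The kernel choice and input range are assumptions (the excerpt only states zero mean and unit variance), and the sharpness function simply averages the predictive variance, matching the description of [10] given here.

```python
import numpy as np

def rbf_kernel(x, lengthscale=0.5):
    # Assumed RBF kernel; diag(K) == 1, so marginals have unit variance.
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def sample_gp_priors(n_functions=100, n_points=50, seed=0):
    # Draw n_functions realizations of a zero-mean GP prior on a 1D grid.
    rng = np.random.default_rng(seed)
    x = np.linspace(-2.0, 2.0, n_points)
    K = rbf_kernel(x) + 1e-6 * np.eye(n_points)  # jitter for stability
    L = np.linalg.cholesky(K)
    f = L @ rng.standard_normal((n_points, n_functions))
    return x, f.T  # shape (n_functions, n_points)

def sharpness(pred_var):
    # Sharpness as described above: average predictive variance.
    return float(np.mean(pred_var))

x, fs = sample_gp_priors()          # 100 prior function realizations
s = sharpness(np.array([1.0, 2.0, 3.0]))
```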